Search for: All records

Creators/Authors contains: "Roy, Nicholas"

Note: Clicking a Digital Object Identifier (DOI) link takes you to an external site maintained by the publisher. Some full-text articles may not yet be available free of charge during the embargo period.

Some links on this page may take you to non-federal websites. Their policies may differ from those of this site.

  1. In this paper, we propose a novel method for autonomously seeking out sparsely distributed targets in an unknown underwater environment. Our Sparse Adaptive Search and Sample (SASS) algorithm mixes low-altitude observations of discrete targets with high-altitude observations of the surrounding substrates. By combining prior information about the distribution of targets across substrate types with belief modelling over the substrates in the environment, high-altitude observations allow SASS to quickly guide the robot to areas with high target densities. A maximally informative path is constructed autonomously and online using Monte Carlo Tree Search with a novel acquisition function that guides the search to maximise observations of unique targets. We demonstrate our approach in a set of simulated trials using a novel generative species model. SASS consistently outperforms the canonical boustrophedon planner by up to 36% at seeking out unique targets within the first 75–90% of the time a boustrophedon survey requires. Additionally, we verify the performance of SASS on two real-world coral reef datasets. (See the MCTS planning sketch after this list.)
  2. Free, publicly accessible full text available February 21, 2026
  3. Seafloor hydrothermalism plays a critical role in fundamental interactions between geochemical and biological processes in the deep ocean. A significant number of hydrothermal vents are hypothesized to exist, but many remain undiscovered, due in part to the difficulty of detecting hydrothermalism with the standard sensors on rosettes towed in the water column or on robotic survey platforms. Here, we use in situ methane sensors to complement standard sensing technology for hydrothermalism discovery, comparing sensors on a towed rosette and on an autonomous underwater vehicle (AUV) during a 17 km transect in the Northern Guaymas Basin in the Gulf of California. The transect spatially intersected a known hydrothermally active venting site. These data show that methane signalled possible hydrothermal activity 1.5–3 km laterally (100–150 m vertically) from a known vent. As a signal for hydrothermalism, methane performed similarly to standard turbidity sensors (plume detection 2.2–3.3 km from the reference source), and more sensitively and clearly than temperature, salinity, and oxygen instruments, which readily respond to physical mixing in background seawater. We additionally introduce change-point detection algorithms—streaming cross-correlation and regime identification—as a means of real-time hydrothermalism discovery, and discuss related data-supervision technologies that could be used in planning, executing, and monitoring exploratory surveys for hydrothermalism. (See the streaming cross-correlation sketch after this list.)
  4. Contemporary approaches to perception, planning, estimation, and control have allowed robots to operate robustly as our remote surrogates in uncertain, unstructured environments. This progress now creates an opportunity for robots to operate not only in isolation, but also with and alongside humans in our complex environments. Realizing this opportunity requires an efficient and flexible medium through which humans can communicate with collaborative robots. Natural language provides one such medium, and through significant progress in statistical methods for natural-language understanding, robots are now able to interpret a diverse array of free-form navigation, manipulation, and mobile-manipulation commands. However, most contemporary approaches require a detailed, prior spatial-semantic map of the robot’s environment that models the space of possible referents of an utterance. Consequently, these methods fail when robots are deployed in new, previously unknown, or partially observed environments, particularly when mental models of the environment differ between the human operator and the robot. This paper provides a comprehensive description of a novel learning framework that allows field and service robots to interpret and correctly execute natural-language instructions in a priori unknown, unstructured environments. Integral to our approach is its use of language as a “sensor”—inferring spatial, topological, and semantic information implicit in natural-language utterances and then exploiting this information to learn a distribution over a latent environment model. We incorporate this distribution in a probabilistic language grounding model and infer a distribution over a symbolic representation of the robot’s action space, consistent with the utterance. We use imitation learning to identify a belief-space policy that reasons over the environment and behavior distributions. We evaluate our framework through a variety of navigation and mobile-manipulation experiments involving an unmanned ground vehicle, a robotic wheelchair, and a mobile manipulator, demonstrating that the algorithm can follow natural-language instructions without prior knowledge of the environment. (See the language-as-sensor sketch after this list.)
  5. The goal of this article is to enable robots to perform robust task execution following human instructions in partially observable environments. A robot’s ability to interpret and execute commands is fundamentally tied to its semantic world knowledge. Commonly, robots use exteroceptive sensors, such as cameras or LiDAR, to detect entities in the workspace and infer their visual properties and spatial relationships. However, semantic world properties are often visually imperceptible. We posit the use of non-exteroceptive modalities, including physical proprioception, factual descriptions, and domain knowledge, as mechanisms for inferring semantic properties of objects. We introduce a probabilistic model that fuses linguistic knowledge with visual and haptic observations into a cumulative belief over latent world attributes, inferring the meaning of instructions and executing the instructed tasks in a manner robust to erroneous, noisy, or contradictory evidence. In addition, we provide a method that allows the robot to communicate knowledge dissonance back to the human as a means of correcting errors in the operator’s world model. Finally, we propose an efficient framework that anticipates possible linguistic interactions and infers the associated groundings for the current world state, thereby bootstrapping both language understanding and generation. We present experiments on manipulators for tasks that require inference over partially observed semantic properties, and we evaluate our framework’s ability to exploit expressed information and knowledge bases to facilitate convergence, and to generate statements that correct declared facts observed to be inconsistent with the robot’s estimate of object properties. (See the belief-fusion sketch after this list.)
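The SASS abstract (item 1) describes constructing an informative path online with Monte Carlo Tree Search (MCTS) and an acquisition function that rewards observations of unique targets. The sketch below is a minimal illustration of that pattern, not the authors' implementation: the grid world, the stand-in acquisition function expected_new_targets, and all other names and parameters are assumptions made for this example.

```python
"""Minimal sketch (not the SASS authors' code) of MCTS-based informative
path planning: pick moves that maximise expected observations of unseen
targets. All names, the reward model, and parameters are illustrative."""
import math
import random

ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # 4-connected grid moves (assumed)
GRID = 10                                     # hypothetical survey grid size

def clamp(cell):
    """Keep a cell inside the grid."""
    return (min(max(cell[0], 0), GRID - 1), min(max(cell[1], 0), GRID - 1))

def expected_new_targets(cell, visited, density):
    """Stand-in acquisition: expected count of *unseen* targets in a cell,
    discounted to zero once the cell has already been observed."""
    return 0.0 if cell in visited else density.get(cell, 0.0)

def rollout(cell, visited, density, depth=8):
    """Random rollout accumulating the stand-in acquisition value."""
    total, seen = 0.0, set(visited)
    for _ in range(depth):
        dx, dy = random.choice(ACTIONS)
        cell = clamp((cell[0] + dx, cell[1] + dy))
        total += expected_new_targets(cell, seen, density)
        seen.add(cell)
    return total

def mcts_plan(start, visited, density, n_sims=500, c=1.4):
    """One MCTS decision: choose the next move via UCB1 over root children."""
    stats = {a: [0, 0.0] for a in ACTIONS}   # action -> [visits, total value]
    for _ in range(n_sims):
        n_total = sum(v[0] for v in stats.values()) + 1
        a = max(ACTIONS, key=lambda a: float("inf") if stats[a][0] == 0
                else stats[a][1] / stats[a][0]
                + c * math.sqrt(math.log(n_total) / stats[a][0]))
        nxt = clamp((start[0] + a[0], start[1] + a[1]))
        value = (expected_new_targets(nxt, visited, density)
                 + rollout(nxt, visited | {nxt}, density))
        stats[a][0] += 1
        stats[a][1] += value
    return max(ACTIONS, key=lambda a: stats[a][0])  # most-visited child

if __name__ == "__main__":
    density = {(x, y): random.random() for x in range(GRID) for y in range(GRID)}
    pos, visited = (0, 0), {(0, 0)}
    for _ in range(20):
        a = mcts_plan(pos, visited, density)
        pos = clamp((pos[0] + a[0], pos[1] + a[1]))
        visited.add(pos)
    print("planned path visited", len(visited), "cells")
```

Returning the most-visited root child is a common MCTS convention. The paper's acquisition function additionally trades off high-altitude substrate observations against low-altitude target observations using substrate-conditioned priors, which this toy reward does not model.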
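Item 3 introduces streaming cross-correlation as a real-time change-point signal for hydrothermalism discovery. As a hedged illustration only (the paper's algorithm, window size, and thresholds are not given here), this sketch slides a window over two synthetic channels, standing in for methane and turbidity, and flags windows where they strongly co-vary.

```python
"""Illustrative sketch (assumed parameters, not the paper's implementation)
of streaming cross-correlation between two sensor channels, flagging windows
of strong co-variation as candidate plume detections."""
from collections import deque
import math
import random

def pearson(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(vx * vy) if vx > 0 and vy > 0 else 0.0

def stream_xcorr(samples, window=50, threshold=0.8):
    """Yield (index, r) whenever the windowed correlation crosses the threshold."""
    methane, turbidity = deque(maxlen=window), deque(maxlen=window)
    for i, (ch4, turb) in enumerate(samples):
        methane.append(ch4)
        turbidity.append(turb)
        if len(methane) == window:
            r = pearson(methane, turbidity)
            if abs(r) >= threshold:
                yield i, r

if __name__ == "__main__":
    # Synthetic transect: sensor noise plus a shared "plume" signal mid-record.
    data = []
    for i in range(600):
        plume = math.exp(-((i - 300) / 40.0) ** 2)
        data.append((plume + 0.1 * random.gauss(0, 1),
                     plume + 0.1 * random.gauss(0, 1)))
    hits = list(stream_xcorr(data))
    if hits:
        print(f"first detection at sample {hits[0][0]}, r={hits[0][1]:.2f}")
```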
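Item 4's central idea is using language as a "sensor": an utterance induces a distribution over latent environment models, which then conditions instruction grounding. The toy below compresses that idea to a single Bayes update over two invented map hypotheses; every value and name is an assumption, and the paper's framework learns a belief-space policy by imitation learning rather than taking an argmax over the posterior.

```python
"""Toy sketch (all structures invented for illustration) of treating language
as a sensor over latent environment models, then grounding an action."""

# Hypothetical environment models: does a kitchen lie left or right of the robot?
hypotheses = {"kitchen_left": 0.5, "kitchen_right": 0.5}   # uniform prior

# Assumed likelihoods P(utterance | model): the utterance "go to the kitchen
# on your right" is far more probable if a kitchen is actually on the right.
likelihood = {"kitchen_left": 0.05, "kitchen_right": 0.95}

# Bayes update: posterior over latent environment models given the utterance.
posterior = {h: hypotheses[h] * likelihood[h] for h in hypotheses}
z = sum(posterior.values())
posterior = {h: p / z for h, p in posterior.items()}

# Ground a symbolic action against the posterior (a simple argmax here; the
# paper instead learns a belief-space policy that reasons over the full
# environment and behavior distributions).
best = max(posterior, key=posterior.get)
action = "turn_right" if best == "kitchen_right" else "turn_left"
print(posterior, "->", action)
```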
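Item 5 fuses linguistic, visual, and haptic observations into a cumulative belief over latent semantic attributes, and reports knowledge dissonance when evidence contradicts a declared fact. A minimal sketch of that fusion pattern follows; the attribute, likelihood values, and scenario are invented for illustration.

```python
"""Minimal sketch (names and numbers are illustrative assumptions) of fusing
linguistic, visual, and haptic evidence into a cumulative belief over a
latent binary attribute, with a dissonance check against a declared fact."""

def update(belief, lik_true, lik_false):
    """One Bayesian update of P(attribute = true) from one modality:
    lik_true = P(obs | true), lik_false = P(obs | false)."""
    num = belief * lik_true
    return num / (num + (1.0 - belief) * lik_false)

# Latent attribute: "the bottle is full" (visually imperceptible).
belief = 0.5                               # uninformative prior
belief = update(belief, 0.90, 0.10)        # operator states "the bottle is full"
belief = update(belief, 0.50, 0.50)        # camera: appearance is uninformative
belief = update(belief, 0.05, 0.95)        # haptics: measured weight says empty

# Knowledge dissonance: haptic evidence contradicts the declared fact, so the
# robot should report the inconsistency back to the operator.
if belief < 0.5:
    print(f"P(full)={belief:.2f}: observed weight contradicts the stated fact")
```

Because each modality enters as a likelihood, contradictory evidence does not overwrite the belief; it shifts it, which is what lets the final check surface the inconsistency instead of silently accepting the stated fact.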